It has been way too long (two years!) since I started this series, and I figure it's time to get back into it. For a refresher, here are part 1: What is intelligence? and part 2: Why Artificial Intelligence?
What follows is by no means a definitive history of artificial intelligence. In fact, Wikipedia already has a very good entry on the history of artificial intelligence. Instead, this is a fairly brief history, with large gaps. I'm a bit more free to editorialize than Wikipedia.
distant past
Humanity has long searched for ways to get the benefits of human intelligence without the requirement of actual humans. For most of human history, the solution was slavery - treating other human beings as if they were machines, and using only a small fraction of their mental potential.
Not only is slavery evil, it is also inefficient - slaves still require water, food, shelter, clothing, none of which is free. Automation of even the simplest sort is so vastly more efficient, along any metric one uses for comparison, that as soon as a task could be automated, it was.
The desire for intelligence in the inanimate has persisted through recorded history. The Golem is a fairly early example, and a cautionary tale as well. From Wikipedia:
The most famous golem narrative involves Judah Loew ben Bezalel, the late 16th century chief rabbi of Prague, also known as the Maharal, who reportedly created a golem to defend the Prague ghetto from anti-Semitic attacks and pogroms. Depending on the version of the legend, the Jews in Prague were to be either expelled or killed under the rule of Rudolf II, the Holy Roman Emperor. To protect the Jewish community, the rabbi constructed the Golem out of clay from the banks of the Vltava river, and brought it to life through rituals and Hebrew incantations. As this golem grew, it became increasingly violent, killing gentiles and spreading fear. A different story tells of a golem that fell in love, and when rejected, became the violent monster seen in most accounts. Some versions have the golem eventually turning on its creator or attacking other Jews.

The Zombie - no, not the Night of the Living Dead type, the Haitian voodoo kind - is a similar cultural expression of this desire for (partial) human intelligence animating human bodies. In both the cases of the Golem (an artificial creature) and the Haitian Zombie (a person enslaved through artificial chemical means and cultural expectations) the desire for control of a portion of human-like intelligence to perform tasks is evident.
Mary Shelley's Frankenstein is another example - a not-human (because he's dead) is re-animated under the belief that the mind could still function. Once again this fictional creation of artificial intelligence is a cautionary tale, as the creature turns on his creator. As an aside, it is also the inspiration for the subtitle to this blog.
In the late 18th century, the Turk chess-playing machine was a fraud that played off the desire for automated intelligence. A skilled chess player was hidden inside an elaborately constructed table with attached mechanical "opponent", operated by the player inside via levers and gears. So clever was the mechanism for hiding the chess master inside, and so great the desire (on the part of the audience) to believe that it was possible to automate intelligence with levers and gears, that the fraud persisted for decades.
the 1800s
In 1837, Charles Babbage developed the idea of a programmable computer, the Analytical Engine, which would have been the first Turing-complete computer had it been finished; only a small portion of it was ever built. Ada Lovelace (born Ada Byron) became the first computer programmer by writing a program to compute Bernoulli numbers for the Analytical Engine. This was an important step, as it showed that some tasks once thought to be purely mental in nature could in fact be performed by machines - that calculation could be automated.
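As an aside, the calculation itself is easy to state in a modern language. Here's a minimal Python sketch - my own illustration using a standard recurrence, not a reconstruction of Lovelace's actual program - that produces the same numbers her program was designed to compute:

    from fractions import Fraction
    from math import comb

    def bernoulli(n):
        """Return Bernoulli numbers B_0..B_n as exact fractions, via the
        recurrence B_m = -1/(m+1) * sum_{k<m} C(m+1, k) * B_k (so B_1 = -1/2)."""
        B = [Fraction(1)]
        for m in range(1, n + 1):
            s = sum(comb(m + 1, k) * B[k] for k in range(m))
            B.append(Fraction(-1, m + 1) * s)
        return B

    print([str(b) for b in bernoulli(8)])
    # ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']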
Then in 1854, George Boole developed a variation on elementary algebra called Boolean algebra, operating solely on the "truth values" 0 and 1 rather than on all numbers. This Boolean algebra is the basis for all digital logic.
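To make that concrete, here's a tiny Python sketch (the function names are just my own labels) showing the basic Boolean operations on 0 and 1, and exhaustively checking one of De Morgan's laws - exactly the kind of identity that digital logic circuits rely on:

    # Boolean algebra on the truth values 0 and 1.
    # AND behaves like multiplication, OR like capped addition, NOT like (1 - x).
    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def NOT(a):    return 1 - a

    # Check De Morgan's law: NOT(a AND b) == NOT(a) OR NOT(b), for all inputs.
    for a in (0, 1):
        for b in (0, 1):
            assert NOT(AND(a, b)) == OR(NOT(a), NOT(b))
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOT a:", NOT(a))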
the 1930s
In 1937, Claude Shannon completed his groundbreaking master's thesis, A Symbolic Analysis of Relay and Switching Circuits (available here). In it he showed that circuits of electromechanical relays could implement Boolean algebra. In 1948 he introduced the idea of the "bit" as the smallest unit of information (among other ideas like information entropy) in the paper A Mathematical Theory of Communication (available here), which is the foundation of what we today call information theory. He showed that switching circuits could perform logical operations, and that extraordinarily complex computations could be performed with electronics. Once again, what had once required the intelligence of a human could now be automated.
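To give a feel for what the "bit" measures, here's a short Python sketch (my own illustration, not something from Shannon's paper) computing Shannon entropy in bits for a few simple distributions - a fair coin flip carries exactly one bit of information, a biased coin less, a certain outcome none:

    import math

    def entropy_bits(probs):
        """Shannon entropy in bits: H = sum of -p * log2(p),
        skipping zero-probability outcomes."""
        return sum(-p * math.log2(p) for p in probs if p > 0)

    print(entropy_bits([0.5, 0.5]))  # fair coin: 1.0 bit
    print(entropy_bits([0.9, 0.1]))  # biased coin: about 0.469 bits
    print(entropy_bits([1.0]))       # certain outcome: 0.0 bits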
In 1936, Alan Turing described a thought experiment representing an automatic computing machine; this thought experiment has since become known as the "Turing Machine". In 1948 Turing described the idea as:
...an infinite memory capacity obtained in the form of an infinite tape marked out into squares, on each of which a symbol could be printed. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol and its behavior is in part determined by that symbol, but the symbols on the tape elsewhere do not affect the behavior of the machine. However, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine. Any symbol on the tape may therefore eventually have an innings.

Although the physical details are different, the description matches the operation of any computer today. Instead of a physical tape being fed through the machine, a memory address is polled, and the resulting "symbol" consists of a pattern of low and high voltages on wires - what we think of as the zeros and ones in a byte.
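The whole model is small enough to simulate in a few lines of code. Here's a minimal Python sketch of the tape-and-head machine described above (the rule format and the example machine are my own illustrative choices, not anything from Turing's paper):

    def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
        """rules maps (state, scanned symbol) -> (symbol to write, head move, next state),
        where the head move is -1 (left), +1 (right), or 0 (stay)."""
        cells = dict(enumerate(tape))  # sparse tape: position -> symbol
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += move
        return "".join(cells[i] for i in sorted(cells))

    # Example machine: scan right, flipping 0s and 1s, and halt at the first blank.
    flip_bits = {
        ("start", "0"): ("1", +1, "start"),
        ("start", "1"): ("0", +1, "start"),
        ("start", "_"): ("_", 0, "halt"),
    }

    print(run_turing_machine(flip_bits, "10110_"))  # prints 01001_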
If a Turing machine is capable of emulating any other Turing machine, then it is considered a Universal Turing Machine. This is the root of the stored-program computer: the description of the machine being emulated is itself just data on the tape, much as a modern computer holds its program in memory alongside the data it operates on. Indeed, most computers today can emulate pretty much any other computer, and the process usually requires only the appropriate software.
We're not done with Alan Turing yet. Besides groundbreaking work in computational theory, he also turned his attention to artificial intelligence. He devised a thought experiment to determine whether an artificial device was actually intelligent. The Turing Test consists of a human, an AI, and a human judge. The judge has a conversation with both the AI and the human through a teletype arrangement - what we would recognize today as a chat window - and tries to decide which one is the human. If the judge can't figure it out solely from the conversation in chat, then the AI is considered intelligent, according to the Turing Test. It isn't a perfect test, but at least it was a first attempt to evaluate the quality of our efforts towards developing artificial intelligence.
So to recap: there is a historical desire for at least (or only) a portion of human intelligence to perform tasks; it was shown that the task of calculation can be automated, mechanically; calculation and complex logical operations can also be automated with electronics; calculating machines with stored programs can emulate (or simulate) other calculating machines; and it is possible to test a machine for intelligence, at least to a first-order approximation.
the birth of AI as a field of study
The 1956 Dartmouth Conference is generally regarded as the birth of AI. Indeed, the term "artificial intelligence" was coined for the conference, and it was there that it came to be accepted as the name for the new field of study. The Dartmouth Conference led to an explosion of work in the field that continued until about 1974.
The Dartmouth Conference is a pretty good place to stop this section. The next three parts of this series will look at three approaches to artificial intelligence: neural networks, fuzzy cognitive maps, and genetic algorithms.